Massively Parallel Architectures for AI: NETL, Thistle, and Boltzmann Machines

Author

  • Terrence J. Sejnowski

Abstract

It is becoming increasingly apparent that some aspects of intelligent behavior require enormous computational power and that some sort of massively parallel computing architecture is the most plausible way to deliver such power. Parallelism, rather than raw speed of the computing elements, seems to be the way that the brain gets such jobs done. But even if the need for massive parallelism is admitted, there is still the question of what kind of parallel architecture best fits the needs of various AI tasks. In this paper we will attempt to isolate a number of basic computational tasks that an intelligent system must perform. We will describe several families of massively parallel computing architectures, and we will see which of these computational tasks can be handled by each of these families. In particular, we will describe a new architecture, which we call the Boltzmann machine, whose abilities appear to include a number of tasks that are inefficient or impossible on the other architectures.

FAMILIES OF PARALLEL ARCHITECTURES

By "massively parallel" architectures, we mean machines with a very large number of processing elements (perhaps very simple ones) working on a single task. A massively parallel system may be complete and self-contained, or it may be a special-purpose device, performing some particular task as part of a larger system that contains other modules of a different character. In this paper we will focus on the computation performed by a single parallel module, ignoring the issue of how to integrate a collection of modules into a complete system.

One useful way of classifying these massively parallel architectures is by the type of signal that is passed among the elements. Fahlman (1982) proposes a division of these systems into three classes: marker-passing, value-passing, and message-passing systems. Message-passing systems are the most powerful family, and by far the most complex.
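The Boltzmann machine mentioned above is, in its standard formulation, a network of stochastic binary units connected by symmetric weights, with each unit turning on with a probability given by a sigmoid of its energy gap at some temperature. The abstract does not spell out the update rule, so the following is a minimal sketch under those standard assumptions; all names (weights, state, temperature) are illustrative.

```python
import math
import random

random.seed(0)

def update_unit(state, weights, i, temperature):
    """Stochastically set unit i to 1 with probability sigmoid(energy_gap / T)."""
    energy_gap = sum(weights[i][j] * state[j]
                     for j in range(len(state)) if j != i)
    p_on = 1.0 / (1.0 + math.exp(-energy_gap / temperature))
    state[i] = 1 if random.random() < p_on else 0

# Tiny network: two units with a positive (mutually excitatory) symmetric weight.
weights = [[0.0, 2.0],
           [2.0, 0.0]]
state = [1, 0]

# A few sweeps at a fixed temperature; with a positive weight the network
# tends toward states where both units agree.
for _ in range(10):
    for i in range(len(state)):
        update_unit(state, weights, i, temperature=1.0)

print(state)
```

In the full algorithm the temperature is gradually lowered (simulated annealing) so the network settles into low-energy states; this sketch shows only the per-unit stochastic update.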
They pass around messages of arbitrary complexity and perform complex operations on these messages. Such generality has its price: the individual computing elements are complex, the communication costs are high, and there may be severe contention and traffic congestion problems in the network. Message passing does not seem plausible as a detailed model of processing in the brain. Such models are being actively studied elsewhere (Hillis, 1981; Hewitt, 1980) and we have nothing more to say about them here.

Marker-passing systems, of which NETL (Fahlman, 1979) is an example, are the simplest family and the most limited. In such systems, the communication among processing...
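The marker-passing idea behind NETL can be illustrated with a small sketch: each node in a semantic network holds a set of single-bit markers, and one parallel step copies a marker across IS-A links, so that repeated steps compute a transitive closure. The node names and marker labels below are illustrative, not from the paper.

```python
# child -> parents in a tiny IS-A hierarchy (illustrative example).
is_a = {
    "Clyde": ["elephant"],
    "elephant": ["mammal"],
    "mammal": ["animal"],
    "animal": [],
}

def propagate(marker, markers):
    """One parallel step: every node holding the marker passes it to its parents."""
    new = {node: set(bits) for node, bits in markers.items()}
    for node, bits in markers.items():
        if marker in bits:
            for parent in is_a[node]:
                new[parent].add(marker)
    return new

# Place marker M1 on Clyde, then propagate to a fixed point: the marked set
# is exactly Clyde plus everything Clyde IS-A, i.e. a transitive closure.
markers = {node: set() for node in is_a}
markers["Clyde"] = {"M1"}
while True:
    updated = propagate("M1", markers)
    if updated == markers:
        break
    markers = updated

print(sorted(node for node, bits in markers.items() if "M1" in bits))
# -> ['Clyde', 'animal', 'elephant', 'mammal']
```

Queries such as "is Clyde an animal?" then reduce to checking whether a marker placed at one node reaches another, which is the kind of set-intersection and inheritance operation marker-passing hardware handles efficiently.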



Publication date: 1983